The demand for computing continues to grow exponentially. This growth will translate into exponential growth in computing's energy consumption unless improvements in its energy efficiency outpace the increase in demand. However, after decades of research, further gains in energy efficiency are becoming increasingly challenging to achieve because computing is already highly optimized. As a result, at some point the growth in computing demand is likely to outpace the growth in its energy efficiency, potentially increasing energy consumption substantially. This exponential growth, if left unchecked, will position computing as a significant contributor to global carbon emissions. Although prominent technology companies have recognized the problem and are trying to reduce their carbon emissions, their successes can understandably, yet unintentionally, convey the false impression that the problem is solved or soon will be. If this false impression discourages further research in this area, it would be harmful, because eliminating the carbon emissions of computing, and of society more broadly, is far from a solved problem. To better understand the scope of the problem, this paper distills the fundamental trends that determine the carbon footprint of computing and their implications for achieving sustainable computing.
In recent years, various gradient descent algorithms, including plain gradient descent, gradient descent with momentum, adaptive gradient (AdaGrad), root-mean-square propagation (RMSProp), and adaptive moment estimation (Adam), have been applied to parameter optimization of several deep learning models, achieving higher accuracies or lower errors. These optimization algorithms may require setting the values of several hyperparameters, including a learning rate, momentum coefficients, etc. Furthermore, the convergence speed and solution accuracy may be influenced by the hyperparameter values. Therefore, this study proposes an analytical framework that uses mathematical models to analyze the mean error of each objective function under various gradient descent algorithms. Moreover, a suitable value for each hyperparameter can be determined by minimizing the mean error. Principles for setting hyperparameter values are generalized from the analysis results for model optimization. The experimental results show that faster convergence and lower errors can be obtained by the proposed method.
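As a hedged illustration of the update rules this abstract compares (plain gradient descent, momentum, and Adam), here is a minimal NumPy sketch; the quadratic toy objective and the hyperparameter values are illustrative placeholders, not the paper's analytical framework or settings.

```python
import numpy as np

# Toy objective f(w) = ||w||^2 / 2 with gradient g(w) = w; placeholder only.
def grad(w):
    return w

def sgd(w, lr=0.1, steps=100):
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

def momentum(w, lr=0.1, beta=0.9, steps=100):
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad(w)            # accumulate velocity
        w = w - lr * v
    return w

def adam(w, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=100):
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g         # first-moment estimate
        v = b2 * v + (1 - b2) * g ** 2    # second-moment estimate
        m_hat = m / (1 - b1 ** t)         # bias correction
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w0 = np.array([5.0, -3.0])
print(sgd(w0.copy()), momentum(w0.copy()), adam(w0.copy()))
```

Each variant reaches the minimum of the toy objective; how the learning rate and moment coefficients affect the mean error is exactly what the proposed framework analyzes.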
The reward hypothesis posits that, "all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)." We aim to fully settle this hypothesis. This will not conclude with a simple affirmation or refutation, but rather specify completely the implicit requirements on goals and purposes under which the hypothesis holds.
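Stated formally, the objective the hypothesis refers to is usually written as the expected (possibly discounted) return; the following is standard RL notation, not a quotation from this paper, and the discount factor is an assumption (the undiscounted case corresponds to gamma equal to 1).

```latex
% Expected cumulative scalar reward maximized under the reward hypothesis
\[
  J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, R_{t+1}\right],
  \qquad 0 \le \gamma \le 1 .
\]
```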
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
Text classification is a natural language processing (NLP) task relevant to many commercial applications, like e-commerce and customer service. Naturally, classifying such excerpts accurately often represents a challenge, due to intrinsic language aspects, like irony and nuance. To accomplish this task, one must provide a robust numerical representation for documents, a process known as embedding. Embedding is a key NLP field nowadays, having seen significant advances in the last decade, especially after the introduction of the word-to-vector concept and the popularization of Deep Learning models for solving NLP tasks, including Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformer-based Language Models (TLMs). Despite the impressive achievements in this field, the literature coverage regarding generating embeddings for Brazilian Portuguese texts is scarce, especially when considering commercial user reviews. Therefore, this work aims to provide a comprehensive experimental study of embedding approaches targeting binary sentiment classification of user reviews in Brazilian Portuguese. The study covers NLP models ranging from classical (Bag-of-Words) to state-of-the-art (Transformer-based). The methods are evaluated on five open-source databases with pre-defined data partitions, made available in an open digital repository to encourage reproducibility. The fine-tuned TLMs achieved the best results in all cases, followed by the feature-based TLM, LSTM, and CNN, with alternating ranks depending on the database under analysis.
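As a hedged sketch of the classical end of the spectrum evaluated here, a Bag-of-Words (TF-IDF) baseline for binary sentiment classification could look as follows; the toy Portuguese reviews and labels are invented placeholders, not samples from the paper's five databases.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy reviews in Brazilian Portuguese; real experiments would use
# the paper's open-source databases with their pre-defined partitions.
reviews = ["produto excelente, chegou rápido", "péssimo, veio quebrado",
           "gostei muito da qualidade", "não recomendo, atendimento ruim"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Bag-of-Words (TF-IDF) embedding followed by a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)
print(model.predict(["entrega rápida e produto de ótima qualidade"]))
```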
Accurate and consistent vehicle localization in urban areas is challenging due to the large-scale and complicated environments. In this paper, we propose onlineFGO, a novel time-centric graph-optimization-based localization method that fuses multiple sensor measurements with the continuous-time trajectory representation for vehicle localization tasks. We generalize the graph construction independent of any spatial sensor measurements by creating the states deterministically on time. As the trajectory representation in continuous-time enables querying states at arbitrary times, incoming sensor measurements can be factorized on the graph without requiring state alignment. We integrate different GNSS observations: pseudorange, deltarange, and time-differenced carrier phase (TDCP) to ensure global reference and fuse the relative motion from a LiDAR-odometry to improve the localization consistency while GNSS observations are not available. Experiments on general performance, effects of different factors, and hyper-parameter settings are conducted in a real-world measurement campaign in Aachen city that contains different urban scenarios. Our results show an average 2D error of 0.99m and consistent state estimation in urban scenarios.
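The time-centric graph construction can be illustrated conceptually: states are created deterministically on a fixed time grid regardless of when measurements arrive, and each incoming measurement becomes a factor attached to the bracketing states. The plain-Python sketch below shows only that bookkeeping; it is a simplified illustration, not the onlineFGO implementation, and the state spacing, factor types, and interpolation details are assumptions.

```python
from bisect import bisect_left
from dataclasses import dataclass, field

@dataclass
class TimeCentricGraph:
    """States live on a fixed time grid; measurements become factors tied to
    the two bracketing states, so no state alignment with sensor timestamps
    is needed (a continuous-time trajectory would interpolate between them)."""
    dt: float = 0.1                       # state spacing in seconds (assumed)
    state_times: list = field(default_factory=list)
    factors: list = field(default_factory=list)

    def extend_to(self, t: float):
        # Create states on the fixed grid until the grid covers time t.
        if not self.state_times:
            self.state_times.append(0.0)
        while self.state_times[-1] < t:
            self.state_times.append(self.state_times[-1] + self.dt)

    def add_measurement(self, t: float, kind: str, value):
        self.extend_to(t)
        i = bisect_left(self.state_times, t)
        left = max(i - 1, 0)
        right = min(i, len(self.state_times) - 1)
        self.factors.append((kind, value, left, right))

g = TimeCentricGraph()
g.add_measurement(0.23, "gnss_pseudorange", 2.1e7)
g.add_measurement(0.27, "lidar_odometry", (0.05, 0.0, 0.001))
print(len(g.state_times), g.factors)
```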
Groundwater level forecasting is an applied time series forecasting task with important societal impacts, helping to optimize water management and to prevent certain natural disasters, such as floods or severe droughts. Machine learning methods have been reported in the literature for this task, but they focus only on predicting the groundwater level at a single location. A global forecasting approach aims to leverage groundwater level time series from multiple locations to produce forecasts, either at one location at a time or at several locations at once. Given the success of global forecasting methods in well-known competitions, it is natural to evaluate them on groundwater level forecasting and to see how they compare with local methods. In this work, we created a dataset of 1026 groundwater level time series. Each time series consists of daily measurements of the groundwater level and two exogenous variables, rainfall and evapotranspiration. The dataset is made available to the community for reproducibility and further evaluation. To identify the best configuration for effectively forecasting the groundwater level over the full set of time series, we compared different predictors, including local and global time series forecasting methods. We also evaluated the impact of the exogenous variables. Our analysis of the results shows that the best forecasts are obtained by a global method trained on past groundwater levels and rainfall data.
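A hedged sketch of the global-forecasting idea described here: pool lagged groundwater levels and rainfall from all locations into one training set and fit a single shared model. The feature choices, lag length, synthetic data, and regressor below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_supervised(level, rain, lags=7):
    """Turn one station's daily series into (features, target) pairs using
    the previous `lags` days of groundwater level and rainfall."""
    X, y = [], []
    for t in range(lags, len(level)):
        X.append(np.concatenate([level[t - lags:t], rain[t - lags:t]]))
        y.append(level[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
# Synthetic stand-ins for a few stations; the real dataset has 1026 series.
stations = [(rng.normal(size=200).cumsum(), rng.gamma(1.0, 2.0, size=200))
            for _ in range(5)]

# Global approach: concatenate supervised samples from every station
# and train one model shared across all locations.
Xs, ys = zip(*(make_supervised(lvl, rain) for lvl, rain in stations))
X_all, y_all = np.vstack(Xs), np.concatenate(ys)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_all, y_all)
print(model.predict(X_all[:3]))
```

A local approach would instead fit one such model per station on that station's samples only.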
The zero-shot cross-lingual abilities of models pretrained on multilingual or even monolingual corpora have spurred many hypotheses to explain this intriguing empirical result. However, due to the cost of pretraining, most studies use publicly available models, whose pretraining methodologies (e.g., choice of tokenization, corpus size, and compute budget) can differ substantially. When researchers pretrain their own models, they often do so under a limited budget, and the resulting models may perform significantly worse than SOTA models. These experimental differences have led to various inconsistent conclusions about the nature of the cross-lingual abilities of these models. To help further research on the topic, we release 10 monolingual byte-level models, rigorously pretrained under the same configuration with a large compute budget (equivalent to 420 days on a V100) and on corpora four times larger than the original BERT's. Because they are tokenizer-free, the problem of unseen-token embeddings is eliminated, allowing researchers to run a wider range of cross-lingual experiments on languages with different scripts. Additionally, we release two models pretrained on non-natural-language text that can be used for sanity-check experiments. Experiments on QA and NLI tasks show that our monolingual models achieve performance competitive with multilingual ones, and can therefore strengthen our understanding of cross-lingual transferability in language models.
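The "tokenizer-free" aspect mentioned above can be illustrated simply: byte-level models map text directly to UTF-8 byte IDs, so no input can ever be out of vocabulary, regardless of script. The sketch below is a minimal illustration; the offset reserved for special symbols is an assumption, not these models' exact convention.

```python
# Byte-level encoding: every string in any script maps to IDs from a fixed
# 256-symbol vocabulary (plus a few reserved specials), so there are no
# unseen-token embeddings. The offset of 3 for PAD/BOS/EOS is an assumption.
SPECIALS = 3

def encode(text: str) -> list[int]:
    return [b + SPECIALS for b in text.encode("utf-8")]

def decode(ids: list[int]) -> str:
    return bytes(i - SPECIALS for i in ids).decode("utf-8")

for sample in ["hello", "águas subterrâneas", "地下水"]:
    ids = encode(sample)
    assert decode(ids) == sample
    print(sample, "->", len(ids), "byte IDs")
```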
A genetic algorithm (GA) is a search-based optimization technique based on the principles of genetics and natural selection. We propose an algorithm that enhances the classical GA with input from a quantum annealer. As in a classical GA, the algorithm works by breeding a population of possible solutions based on their fitness. However, the population of individuals is defined by the continuous couplings on the quantum annealer, and quantum annealing then produces the corresponding phenotypes that represent trial solutions. This introduces a form of directed mutation into the algorithm that can enhance its performance in various ways. Two key enhancements come from the continuous couplings inheriting strength from the fitness of the parents (so-called nepotism) and from the annealer couplings exposing the entire population to the influence of the fittest individuals (so-called quantum polyandry). We find that our algorithm is significantly more powerful than a classical GA on several simple problems.
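For reference, the classical GA baseline that the quantum-annealer-enhanced version builds on can be sketched as follows; the bit-string representation, one-max fitness, and hyperparameters are illustrative assumptions, and the quantum-annealing step (couplings producing phenotypes) is deliberately not modeled here.

```python
import random

def classical_ga(fitness, n_bits=20, pop_size=30, generations=50,
                 p_mut=0.02, seed=0):
    """Minimal classical genetic algorithm: tournament selection,
    one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g   # bit-flip mutation
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Toy "one-max" fitness: count of ones in the bit string.
best = classical_ga(fitness=sum)
print(best, sum(best))
```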
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) sentence- and word-level quality prediction; (ii) explainable QE; and (iii) critical error detection. For all tasks, we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi and equipping it with a word-level sequence tagger and an explanation extractor. Our results show that incorporating references during pretraining improves performance on downstream tasks for several language pairs, and that jointly training with sentence- and word-level objectives yields further boosts. Furthermore, combining attention and gradient information proved to be the top strategy for extracting good explanations from sentence-level QE models. Overall, our submissions achieved the best results in all three tasks for almost all language pairs.